[LV] Check full partial reduction chains in order. #168036

Conversation
@llvm/pr-subscribers-llvm-transforms @llvm/pr-subscribers-vectorizers

Author: Florian Hahn (fhahn)

Changes: #162822 added another validation step to check that entries in a partial reduction chain have the same scale factor. But the validation was still dependent on the order of entries in PartialReductionChains and would fail to reject some cases (e.g. if the first link matched the scale of the second link, but the second link is invalidated later). To fix that, group chains by their starting phi nodes, then perform the validation for each chain, and if it fails, invalidate the whole chain for the phi.

Full diff: https://github.com/llvm/llvm-project/pull/168036.diff

2 Files Affected:
diff --git a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
index 9f0d6fcb237ef..1a88fa28258f3 100644
--- a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
+++ b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
@@ -7975,13 +7975,13 @@ VPReplicateRecipe *VPRecipeBuilder::handleReplication(VPInstruction *VPI,
/// Find all possible partial reductions in the loop and track all of those that
/// are valid so recipes can be formed later.
void VPRecipeBuilder::collectScaledReductions(VFRange &Range) {
- // Find all possible partial reductions.
- SmallVector<std::pair<PartialReductionChain, unsigned>>
- PartialReductionChains;
- for (const auto &[Phi, RdxDesc] : Legal->getReductionVars()) {
+ // Find all possible partial reductions, grouping chains by their PHI.
+ MapVector<Instruction *,
+ SmallVector<std::pair<PartialReductionChain, unsigned>>>
+ ChainsByPhi;
+ for (const auto &[Phi, RdxDesc] : Legal->getReductionVars())
getScaledReductions(Phi, RdxDesc.getLoopExitInstr(), Range,
- PartialReductionChains);
- }
+ ChainsByPhi[Phi]);
// A partial reduction is invalid if any of its extends are used by
// something that isn't another partial reduction. This is because the
@@ -7989,8 +7989,9 @@ void VPRecipeBuilder::collectScaledReductions(VFRange &Range) {
// Build up a set of partial reduction ops for efficient use checking.
SmallPtrSet<User *, 4> PartialReductionOps;
- for (const auto &[PartialRdx, _] : PartialReductionChains)
- PartialReductionOps.insert(PartialRdx.ExtendUser);
+ for (const auto &[_, Chains] : ChainsByPhi)
+ for (const auto &[PartialRdx, _] : Chains)
+ PartialReductionOps.insert(PartialRdx.ExtendUser);
auto ExtendIsOnlyUsedByPartialReductions =
[&PartialReductionOps](Instruction *Extend) {
@@ -8001,31 +8002,41 @@ void VPRecipeBuilder::collectScaledReductions(VFRange &Range) {
// Check if each use of a chain's two extends is a partial reduction
// and only add those that don't have non-partial reduction users.
- for (auto Pair : PartialReductionChains) {
- PartialReductionChain Chain = Pair.first;
- if (ExtendIsOnlyUsedByPartialReductions(Chain.ExtendA) &&
- (!Chain.ExtendB || ExtendIsOnlyUsedByPartialReductions(Chain.ExtendB)))
- ScaledReductionMap.try_emplace(Chain.Reduction, Pair.second);
+ for (const auto &[_, Chains] : ChainsByPhi) {
+ for (const auto &[Chain, Scale] : Chains) {
+ if (ExtendIsOnlyUsedByPartialReductions(Chain.ExtendA) &&
+ (!Chain.ExtendB ||
+ ExtendIsOnlyUsedByPartialReductions(Chain.ExtendB)))
+ ScaledReductionMap.try_emplace(Chain.Reduction, Scale);
+ }
}
// Check that all partial reductions in a chain are only used by other
// partial reductions with the same scale factor. Otherwise we end up creating
// users of scaled reductions where the types of the other operands don't
// match.
- for (const auto &[Chain, Scale] : PartialReductionChains) {
- auto AllUsersPartialRdx = [ScaleVal = Scale, this](const User *U) {
- auto *UI = cast<Instruction>(U);
- if (isa<PHINode>(UI) && UI->getParent() == OrigLoop->getHeader()) {
- return all_of(UI->users(), [ScaleVal, this](const User *U) {
- auto *UI = cast<Instruction>(U);
- return ScaledReductionMap.lookup_or(UI, 0) == ScaleVal;
- });
+ for (const auto &[Phi, Chains] : ChainsByPhi) {
+ for (const auto &[Chain, Scale] : Chains) {
+ auto AllUsersPartialRdx = [ScaleVal = Scale, this](const User *U) {
+ auto *UI = cast<Instruction>(U);
+ if (isa<PHINode>(UI) && UI->getParent() == OrigLoop->getHeader()) {
+ return all_of(UI->users(), [ScaleVal, this](const User *U) {
+ auto *UI = cast<Instruction>(U);
+ return ScaledReductionMap.lookup_or(UI, 0) == ScaleVal;
+ });
+ }
+ return ScaledReductionMap.lookup_or(UI, 0) == ScaleVal ||
+ !OrigLoop->contains(UI->getParent());
+ };
+
+ // If any partial reduction entry for the phi is invalid, invalidate the
+ // whole chain.
+ if (!all_of(Chain.Reduction->users(), AllUsersPartialRdx)) {
+ for (const auto &[Chain, _] : Chains)
+ ScaledReductionMap.erase(Chain.Reduction);
+ break;
}
- return ScaledReductionMap.lookup_or(UI, 0) == ScaleVal ||
- !OrigLoop->contains(UI->getParent());
- };
- if (!all_of(Chain.Reduction->users(), AllUsersPartialRdx))
- ScaledReductionMap.erase(Chain.Reduction);
+ }
}
}
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/partial-reduce-incomplete-chains.ll b/llvm/test/Transforms/LoopVectorize/AArch64/partial-reduce-incomplete-chains.ll
index fffab238798e3..47a5a6dc9b384 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/partial-reduce-incomplete-chains.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/partial-reduce-incomplete-chains.ll
@@ -125,4 +125,118 @@ exit:
ret i16 %red.next
}
+
+define void @chained_sext_adds(ptr noalias %src, ptr noalias %dst) #0 {
+; CHECK-NEON-LABEL: define void @chained_sext_adds(
+; CHECK-NEON-SAME: ptr noalias [[SRC:%.*]], ptr noalias [[DST:%.*]]) #[[ATTR1]] {
+; CHECK-NEON-NEXT: [[ENTRY:.*:]]
+; CHECK-NEON-NEXT: br label %[[VECTOR_PH:.*]]
+; CHECK-NEON: [[VECTOR_PH]]:
+; CHECK-NEON-NEXT: br label %[[VECTOR_BODY:.*]]
+; CHECK-NEON: [[VECTOR_BODY]]:
+; CHECK-NEON-NEXT: [[INDEX:%.*]] = phi i64 [ 0, %[[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], %[[VECTOR_BODY]] ]
+; CHECK-NEON-NEXT: [[VEC_PHI:%.*]] = phi <4 x i32> [ zeroinitializer, %[[VECTOR_PH]] ], [ [[PARTIAL_REDUCE1:%.*]], %[[VECTOR_BODY]] ]
+; CHECK-NEON-NEXT: [[TMP0:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[INDEX]]
+; CHECK-NEON-NEXT: [[WIDE_LOAD:%.*]] = load <16 x i8>, ptr [[TMP0]], align 1
+; CHECK-NEON-NEXT: [[TMP1:%.*]] = sext <16 x i8> [[WIDE_LOAD]] to <16 x i32>
+; CHECK-NEON-NEXT: [[PARTIAL_REDUCE:%.*]] = call <4 x i32> @llvm.vector.partial.reduce.add.v4i32.v16i32(<4 x i32> [[VEC_PHI]], <16 x i32> [[TMP1]])
+; CHECK-NEON-NEXT: [[PARTIAL_REDUCE1]] = call <4 x i32> @llvm.vector.partial.reduce.add.v4i32.v16i32(<4 x i32> [[PARTIAL_REDUCE]], <16 x i32> [[TMP1]])
+; CHECK-NEON-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], 16
+; CHECK-NEON-NEXT: [[TMP2:%.*]] = icmp eq i64 [[INDEX_NEXT]], 992
+; CHECK-NEON-NEXT: br i1 [[TMP2]], label %[[MIDDLE_BLOCK:.*]], label %[[VECTOR_BODY]], !llvm.loop [[LOOP6:![0-9]+]]
+; CHECK-NEON: [[MIDDLE_BLOCK]]:
+; CHECK-NEON-NEXT: [[TMP3:%.*]] = call i32 @llvm.vector.reduce.add.v4i32(<4 x i32> [[PARTIAL_REDUCE1]])
+; CHECK-NEON-NEXT: store i32 [[TMP3]], ptr [[DST]], align 4
+; CHECK-NEON-NEXT: br label %[[SCALAR_PH:.*]]
+; CHECK-NEON: [[SCALAR_PH]]:
+; CHECK-NEON-NEXT: br label %[[LOOP:.*]]
+; CHECK-NEON: [[EXIT:.*]]:
+; CHECK-NEON-NEXT: ret void
+; CHECK-NEON: [[LOOP]]:
+; CHECK-NEON-NEXT: [[IV:%.*]] = phi i64 [ 992, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEON-NEXT: [[RED:%.*]] = phi i32 [ [[TMP3]], %[[SCALAR_PH]] ], [ [[ADD_1:%.*]], %[[LOOP]] ]
+; CHECK-NEON-NEXT: [[GEP_SRC:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[IV]]
+; CHECK-NEON-NEXT: [[L:%.*]] = load i8, ptr [[GEP_SRC]], align 1
+; CHECK-NEON-NEXT: [[CONV8:%.*]] = sext i8 [[L]] to i32
+; CHECK-NEON-NEXT: [[ADD:%.*]] = add i32 [[RED]], [[CONV8]]
+; CHECK-NEON-NEXT: [[CONV8_1:%.*]] = sext i8 [[L]] to i32
+; CHECK-NEON-NEXT: [[ADD_1]] = add i32 [[ADD]], [[CONV8_1]]
+; CHECK-NEON-NEXT: [[GEP_DST:%.*]] = getelementptr i8, ptr [[DST]], i64 [[IV]]
+; CHECK-NEON-NEXT: store i32 [[ADD_1]], ptr [[DST]], align 4
+; CHECK-NEON-NEXT: [[IV_NEXT]] = add i64 [[IV]], 1
+; CHECK-NEON-NEXT: [[EXITCOND:%.*]] = icmp eq i64 [[IV_NEXT]], 1000
+; CHECK-NEON-NEXT: br i1 [[EXITCOND]], label %[[EXIT]], label %[[LOOP]], !llvm.loop [[LOOP7:![0-9]+]]
+;
+; CHECK-LABEL: define void @chained_sext_adds(
+; CHECK-SAME: ptr noalias [[SRC:%.*]], ptr noalias [[DST:%.*]]) #[[ATTR1]] {
+; CHECK-NEXT: [[ENTRY:.*]]:
+; CHECK-NEXT: [[TMP0:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP1:%.*]] = shl nuw i64 [[TMP0]], 2
+; CHECK-NEXT: [[MIN_ITERS_CHECK:%.*]] = icmp ult i64 1000, [[TMP1]]
+; CHECK-NEXT: br i1 [[MIN_ITERS_CHECK]], label %[[SCALAR_PH:.*]], label %[[VECTOR_PH:.*]]
+; CHECK: [[VECTOR_PH]]:
+; CHECK-NEXT: [[TMP2:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP3:%.*]] = mul nuw i64 [[TMP2]], 4
+; CHECK-NEXT: [[N_MOD_VF:%.*]] = urem i64 1000, [[TMP3]]
+; CHECK-NEXT: [[N_VEC:%.*]] = sub i64 1000, [[N_MOD_VF]]
+; CHECK-NEXT: br label %[[VECTOR_BODY:.*]]
+; CHECK: [[VECTOR_BODY]]:
+; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ 0, %[[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], %[[VECTOR_BODY]] ]
+; CHECK-NEXT: [[VEC_PHI:%.*]] = phi <vscale x 4 x i32> [ zeroinitializer, %[[VECTOR_PH]] ], [ [[TMP7:%.*]], %[[VECTOR_BODY]] ]
+; CHECK-NEXT: [[TMP4:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[INDEX]]
+; CHECK-NEXT: [[WIDE_LOAD:%.*]] = load <vscale x 4 x i8>, ptr [[TMP4]], align 1
+; CHECK-NEXT: [[TMP5:%.*]] = sext <vscale x 4 x i8> [[WIDE_LOAD]] to <vscale x 4 x i32>
+; CHECK-NEXT: [[TMP6:%.*]] = add <vscale x 4 x i32> [[VEC_PHI]], [[TMP5]]
+; CHECK-NEXT: [[TMP7]] = add <vscale x 4 x i32> [[TMP6]], [[TMP5]]
+; CHECK-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], [[TMP3]]
+; CHECK-NEXT: [[TMP8:%.*]] = icmp eq i64 [[INDEX_NEXT]], [[N_VEC]]
+; CHECK-NEXT: br i1 [[TMP8]], label %[[MIDDLE_BLOCK:.*]], label %[[VECTOR_BODY]], !llvm.loop [[LOOP5:![0-9]+]]
+; CHECK: [[MIDDLE_BLOCK]]:
+; CHECK-NEXT: [[TMP9:%.*]] = call i32 @llvm.vector.reduce.add.nxv4i32(<vscale x 4 x i32> [[TMP7]])
+; CHECK-NEXT: store i32 [[TMP9]], ptr [[DST]], align 4
+; CHECK-NEXT: [[CMP_N:%.*]] = icmp eq i64 1000, [[N_VEC]]
+; CHECK-NEXT: br i1 [[CMP_N]], label %[[EXIT:.*]], label %[[SCALAR_PH]]
+; CHECK: [[SCALAR_PH]]:
+; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ [[N_VEC]], %[[MIDDLE_BLOCK]] ], [ 0, %[[ENTRY]] ]
+; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[TMP9]], %[[MIDDLE_BLOCK]] ], [ 0, %[[ENTRY]] ]
+; CHECK-NEXT: br label %[[LOOP:.*]]
+; CHECK: [[EXIT]]:
+; CHECK-NEXT: ret void
+; CHECK: [[LOOP]]:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[RED:%.*]] = phi i32 [ [[BC_MERGE_RDX]], %[[SCALAR_PH]] ], [ [[ADD_1:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[GEP_SRC:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[IV]]
+; CHECK-NEXT: [[L:%.*]] = load i8, ptr [[GEP_SRC]], align 1
+; CHECK-NEXT: [[CONV8:%.*]] = sext i8 [[L]] to i32
+; CHECK-NEXT: [[ADD:%.*]] = add i32 [[RED]], [[CONV8]]
+; CHECK-NEXT: [[CONV8_1:%.*]] = sext i8 [[L]] to i32
+; CHECK-NEXT: [[ADD_1]] = add i32 [[ADD]], [[CONV8_1]]
+; CHECK-NEXT: [[GEP_DST:%.*]] = getelementptr i8, ptr [[DST]], i64 [[IV]]
+; CHECK-NEXT: store i32 [[ADD_1]], ptr [[DST]], align 4
+; CHECK-NEXT: [[IV_NEXT]] = add i64 [[IV]], 1
+; CHECK-NEXT: [[EXITCOND:%.*]] = icmp eq i64 [[IV_NEXT]], 1000
+; CHECK-NEXT: br i1 [[EXITCOND]], label %[[EXIT]], label %[[LOOP]], !llvm.loop [[LOOP6:![0-9]+]]
+;
+entry:
+ br label %loop
+
+exit: ; preds = %loop
+ ret void
+
+loop: ; preds = %loop, %entry
+ %iv = phi i64 [ 0, %entry ], [ %iv.next, %loop ]
+ %red = phi i32 [ 0, %entry ], [ %add.1, %loop ]
+ %gep.src = getelementptr i8, ptr %src, i64 %iv
+ %l = load i8, ptr %gep.src, align 1
+ %conv8 = sext i8 %l to i32
+ %add = add i32 %red, %conv8
+ %conv8.1 = sext i8 %l to i32
+ %add.1 = add i32 %add, %conv8.1
+ %gep.dst = getelementptr i8, ptr %dst, i64 %iv
+ store i32 %add.1, ptr %dst, align 4
+ %iv.next = add i64 %iv, 1
+ %exitcond = icmp eq i64 %iv.next, 1000
+ br i1 %exitcond, label %exit, label %loop
+}
+
attributes #0 = { "target-cpu"="grace" }
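The grouping-and-invalidation scheme in the diff above can be modeled with a small standalone sketch. All names and the std::map-based representation here are illustrative only (the real code uses LLVM's MapVector keyed by Instruction* and validates actual IR users, not just the recorded scales), but it shows why per-phi grouping makes the result order-independent:

```cpp
#include <map>
#include <string>
#include <vector>

// Simplified model: each chain link records its reduction instruction
// (by name, for illustration) and its scale factor.
struct ChainLink {
  std::string Reduction; // name of the reduction instruction in the link
  unsigned Scale;        // scale factor, e.g. 4 for an i8 -> i32 reduction
};

std::map<std::string, unsigned> buildScaledReductionMap(
    const std::map<std::string, std::vector<ChainLink>> &ChainsByPhi) {
  std::map<std::string, unsigned> ScaledReductionMap;
  for (const auto &[Phi, Chains] : ChainsByPhi)
    for (const ChainLink &L : Chains)
      ScaledReductionMap.emplace(L.Reduction, L.Scale);

  // Validate each phi's chain as a unit: if any link disagrees on the
  // scale, drop every link of that phi, regardless of iteration order.
  for (const auto &[Phi, Chains] : ChainsByPhi) {
    bool Valid = true;
    for (const ChainLink &L : Chains)
      Valid = Valid && L.Scale == Chains.front().Scale;
    if (!Valid)
      for (const ChainLink &L : Chains)
        ScaledReductionMap.erase(L.Reduction);
  }
  return ScaledReductionMap;
}
```

With this grouping, a mismatched second link removes the first link's map entry as well, which is the kind of case the old order-dependent loop over PartialReductionChains could miss.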
SamTebbs33 left a comment:

LGTM
  for (const auto &[Phi, Chains] : ChainsByPhi) {
    for (const auto &[Chain, Scale] : Chains) {
      auto AllUsersPartialRdx = [ScaleVal = Scale, this](const User *U) {
        auto *UI = cast<Instruction>(U);
        if (isa<PHINode>(UI) && UI->getParent() == OrigLoop->getHeader()) {
          return all_of(UI->users(), [ScaleVal, this](const User *U) {
            auto *UI = cast<Instruction>(U);
            return ScaledReductionMap.lookup_or(UI, 0) == ScaleVal;
          });
        }
        return ScaledReductionMap.lookup_or(UI, 0) == ScaleVal ||
               !OrigLoop->contains(UI->getParent());
      };
This code is quite dizzying, and I'm a bit puzzled why it needs to go through the users() of Chain.Reduction and then each of their users(), when I would think this information is already encoded in the ChainsByPhi map itself.

My understanding is that ChainsByPhi is a map of PHI -> [ <chain, scale>, <chain, scale>, ... ] records, where each chain is an operation in that particular reduction chain (defined by its own extend instructions, extend-user (e.g. a mul or another operation in the chain), and the reduction op itself (e.g. add or sub)). If so, why would it not be enough to check that all <chain, scale> entries have the same value for scale?
AllUsersPartialRdx should be the same as before this patch, but we need an extra loop level.

It checks both that the chain is complete, without any gaps/missing links (all in-loop users must also be in the map), and that the scale factors match. The latter could be checked by simply iterating over all entries in the map, but the former unfortunately cannot be at the moment. llvm/test/Transforms/LoopVectorize/AArch64/partial-reduce-incomplete-chains.ll has a few tests with chains with missing links, which need the first check.

But it is now no longer necessary to iterate over all users of phis; we can simply check if it is the phi for the chain, thanks!
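The completeness check described above can be sketched in isolation. This is a hypothetical, simplified model (string names stand in for instructions, and the real AllUsersPartialRdx also accepts users outside the loop): a link is kept only if every in-loop user of its reduction is itself a scaled reduction with a matching scale, so a chain with a missing link is rejected.

```cpp
#include <map>
#include <string>
#include <vector>

// Sketch of the "no gaps" check: a reduction op passes only if every
// in-loop user is itself in the scaled-reduction map with the same
// scale factor. A user missing from the map means the chain has a gap.
bool allUsersPartialRdx(
    const std::string &Rdx, unsigned Scale,
    const std::map<std::string, std::vector<std::string>> &InLoopUsers,
    const std::map<std::string, unsigned> &ScaledReductionMap) {
  auto It = InLoopUsers.find(Rdx);
  if (It == InLoopUsers.end())
    return true; // no in-loop users to conflict with
  for (const std::string &U : It->second) {
    auto S = ScaledReductionMap.find(U);
    if (S == ScaledReductionMap.end() || S->second != Scale)
      return false; // gap in the chain, or mismatched scale
  }
  return true;
}
```

Note why scanning the map alone is not enough: a map whose remaining entries all agree on the scale can still describe an incomplete chain, which only the user walk detects.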
#162822 added another validation step to check that entries in a partial reduction chain have the same scale factor. But the validation was still dependent on the order of entries in PartialReductionChains and would fail to reject some cases (e.g. if the first link matched the scale of the second link, but the second link is invalidated later).
To fix that, group chains by their starting phi nodes, then perform the validation for each chain, and if it fails, invalidate the whole chain for the phi.
Fixes #167243.
Fixes #167867.